Active Appearance Models (AAMs)

Introduction

Active Appearance Models (AAMs) are generative parametric models that describe the shape and appearance of a certain object class; e.g., the human face. In a typical application, these models are matched against input images to obtain the set of parameters that best describe a particular instance of the object being modelled.

The aim of this notebook is to showcase the basic functionality provided by the package menpo.aam.

Build a simple AAM

We start by loading the set of landmarked images that will be used to build the AAM; in this case, the LFPW training set. Note that the images are first cropped (in order to save valuable memory), then rescaled to a consistent face size (necessary for correctly extracting features) and, finally, converted to greyscale. The greyscale conversion is optional: RGB images can also be used and, in principle, so can any n-channel image representation (such as depth images, shape images or feature images such as HOG).


In [ ]:
import menpo.io as pio
from menpo.landmark import labeller, ibug_68_points, ibug_68_trimesh

images = []
# load landmarked images
for i in pio.import_images('/Users/joan/PhD/DataBases/lfpw/trainset/*.png'):
    # crop them
    i.crop_to_landmarks_proportion(0.1)
    # convert them to greyscale
    if i.n_channels == 3:
        images.append(i.as_greyscale(mode='luminosity'))
    else:
        images.append(i)

In [ ]:
%matplotlib inline
#visualize the first image
images[0].landmarks['PTS'].view()

Next we will use the aam_builder function to build an AAM from the previous images. To do so, we typically define a dictionary whose fields specify the type of AAM to be built. In this first example there is no real need to define the entire dictionary explicitly, since we will be using a fairly standard setting; however, for the sake of clarity, we will define it in full, stating all possible options.


In [ ]:
from menpo.aam.base import aam_builder
from menpo.transform.piecewiseaffine import PiecewiseAffineTransform

# set options. Most options are set to their default 
# values. It is not required to explicitly set
# all the options, but this is done here in
# favour of clarity
build_options = {'group': 'PTS', 
                 'label': 'all',
                 'interpolator': 'scipy',
                 'diagonal_range': None,
                 'boundary': 3,
                 'transform_cls': PiecewiseAffineTransform,
                 'trilist': None,
                 'patch_size': None,
                 'n_levels': 1, 
                 'downscale': 2,
                 'scaled_reference_frames': False,
                 'feature_type': None,
                 'max_shape_components': 25,
                 'max_appearance_components': 250}
# build aam
aam = aam_builder(images, **build_options)

The returned AAM object allows us to, for example, generate random AAM instances. Note that we can also specify the particular shape and appearance parameter values from which to generate an instance.

Notes:

  • In the final version of this Notebook one should be able to also print the AAM object.

In [ ]:
%matplotlib inline
# mean instance (zero parameters)
aam.instance().view() 
# random instance
aam.random_instance().view_new() 
# handpicked instance
aam.instance(shape_weights=[2,1.5,-1], 
             appearance_weights=[2,2,-3,-3,3]).view_new()
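
Under the hood, generating an instance from a PCA-based model amounts to adding a weighted combination of principal components to the model mean. The following is a minimal numpy sketch of that idea (illustrative only; the dimensions, names and random data are assumptions, not menpo's internals):

```python
import numpy as np

# Illustrative PCA-style generative model: a mean vector plus an
# orthonormal matrix of principal components (hypothetical sizes).
rng = np.random.default_rng(0)
n_features, n_components = 10, 3
mean = rng.standard_normal(n_features)
components, _ = np.linalg.qr(rng.standard_normal((n_features, n_components)))

def instance(weights):
    # unspecified trailing weights default to zero
    w = np.zeros(n_components)
    w[:len(weights)] = weights
    return mean + components @ w

# zero weights reproduce the mean instance
assert np.allclose(instance([]), mean)
# handpicked weights, analogous to aam.instance(shape_weights=[2, 1.5, -1])
sample = instance([2.0, 1.5, -1.0])
```

In an AAM this construction is applied twice: once to the shape model and once to the appearance model, and the appearance is then warped onto the generated shape.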

As with any other python object we can also save the AAM for later use using pickle.


In [ ]:
import pickle

# set path
path = '/data/PhD/Models/'
# set name
name = 'aam_' + 'lfpw_' + 'default'

# save aam
pickle.dump({'aam': aam}, open(path + name, 'wb'))

And load it back.


In [ ]:
import pickle

# set path
path = '/data/PhD/Models/'
# set name
name = 'aam_' + 'lfpw_' + 'default'

# load aam
obj = pickle.load(open(path + name, "rb"))
aam = obj['aam']

Fit a simple AAM

More importantly, we can now fit the AAM to a novel image using all the methods in the menpo.lucaskanade package. We will start by loading a novel image (i.e. an image that was not used for building the AAM), for example, the BreakingBad image in Menpo's data folder. Note that the current implementation requires the image to be greyscale (because the AAM was built using greyscale images) in order to be fitted. Finally, we also crop the BreakingBad image to make the landmarks easily visible to the user.


In [ ]:
breakingbad = pio.import_builtin_asset('breakingbad.jpg')
breakingbad = breakingbad.as_greyscale(mode='luminosity')

In [ ]:
%matplotlib inline
breakingbad.landmarks['PTS'].view()

In [ ]:
%matplotlib inline
breakingbad.crop_to_landmarks_proportion(0.3, group='PTS')
breakingbad.landmarks['PTS'].view()

Next, we need to initialize the AAM Lucas-Kanade fitting object. Different options can be set regarding the fitting procedure, from the specific algorithm to be used to the number of components that will be optimized. Although in this example we will only use the default fitting setting, for the sake of clarity we will again define an explicit dictionary containing all the options.


In [ ]:
from menpo.aam.fitter import LucasKanadeAAMFitter
from menpo.lucaskanade.appearance import AlternatingInverseCompositional
from menpo.lucaskanade.residual import LSIntensity
from menpo.transform.modeldriven import OrthoMDTransform
from menpo.transform.affine import SimilarityTransform

# set fitting options. As before, we favour
# clarity by explicitly stating all possible
# options
fitting_setting_options = \
    {'lk_algorithm': AlternatingInverseCompositional,
     'residual': LSIntensity,
     'md_transform_cls': OrthoMDTransform,
     'global_transform_cls': SimilarityTransform,
     'n_shape': 6, 
     'n_appearance': 100}

# initialize Lucas-Kanade aam fitting
lk_aam_fitter = LucasKanadeAAMFitter(aam, **fitting_setting_options)

We can now attempt to fit the greyscale version of the BreakingBad image using the fit_image method of the Lucas-Kanade AAM fitter object. Note that, because the BreakingBad image is annotated, we will initialize the fitting procedure by perturbing the original ground truth annotations with white noise; the noise_std option controls the standard deviation of this noise (0.0 below, i.e. the fit starts exactly from the ground truth). Again, in order to favour clarity, an explicit dictionary will be used to set this method's options (do not worry, there will be no more explicit dictionaries).


In [ ]:
fitting_options = {'group': 'PTS',
                   'label': 'all',
                   'noise_std': 0.0,
                   'max_iters': 60,
                   'rotation': False,
                   'verbose': True, 
                   'view': True}

# fit test image
aam_fitting = lk_aam_fitter.fit_image(breakingbad, **fitting_options)
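
White-noise initialization amounts to adding zero-mean Gaussian offsets to the ground-truth landmark coordinates. A minimal numpy sketch of the idea (the coordinates, the scaling by face size and the 0.04 value are all illustrative assumptions, not menpo's exact implementation):

```python
import numpy as np

# hypothetical ground-truth landmarks, one (x, y) row per point
rng = np.random.default_rng(42)
ground_truth = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
noise_std = 0.04  # assumed to be relative to the face size

# scale the noise by the extent of the landmark bounding box
face_size = ground_truth.max(axis=0) - ground_truth.min(axis=0)
noise = rng.standard_normal(ground_truth.shape) * noise_std * face_size.mean()
initial_shape = ground_truth + noise

# with noise_std = 0.0 the initialization is exactly the ground truth
assert np.allclose(ground_truth + 0.0 * noise, ground_truth)
```

A non-zero noise_std therefore tests the fitter's basin of convergence: the larger the perturbation, the further the initial shape sits from the true annotation.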

Not surprisingly, the alignment did not quite work. Do not worry! This was just the most basic AAM we could build ;-).

Note that the fit_image method returns a Fitting object. Apart from containing the final result, i.e. the fitted shape, Fitting objects allow us to analyse and visualize interesting aspects of the fitting procedure, e.g.:

  • Obtain the final fitting result:

In [ ]:
final_shape = aam_fitting.final_shape

%matplotlib inline
final_shape.view()
  • Visualize it on top of the fitted image:

In [ ]:
%matplotlib inline
aam_fitting.view_final_fitting()
  • Visualize the initial position from which the fitting started:

In [ ]:
%matplotlib inline
aam_fitting.view_initialization()
  • Visualize the sequence of warped images:

In [ ]:
%matplotlib qt
aam_fitting.view_warped_images()
  • Visualize the sequence of appearance reconstructions:

In [ ]:
%matplotlib qt
aam_fitting.view_appearance_reconstructions()
  • Visualize the sequence of error images:

In [ ]:
%matplotlib qt
aam_fitting.view_error_images()
  • Plot the error evolution, i.e. the error between the ground truth annotation and the fitted shape per iteration. Note that this is possible only because the original image is annotated.

In [ ]:
%matplotlib inline
aam_fitting.plot_error(color_list=['b'], marker_list=['*'])
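
The error being plotted is typically the mean point-to-point Euclidean distance between the fitted shape and the ground-truth annotation at each iteration. A self-contained sketch of that measure (the function name and the toy coordinates are assumptions for illustration):

```python
import numpy as np

def point_to_point_error(fitted, ground_truth):
    # mean Euclidean distance between corresponding landmark points
    return np.mean(np.linalg.norm(fitted - ground_truth, axis=1))

gt = np.array([[0.0, 0.0], [3.0, 4.0]])
fit = np.array([[0.0, 0.0], [0.0, 0.0]])
# first point is exact, second is 5 pixels away -> mean error 2.5
print(point_to_point_error(fit, gt))  # -> 2.5
```

In the literature this error is usually normalized (e.g. by the inter-ocular distance) so that it is comparable across images of different sizes.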

Building a more powerful AAM

The aam_builder function allows us to automatically build many different types of AAMs by simply specifying the right options.

For example, using the same LFPW training data and similar fitting options as before, we can build and fit a Multiresolution HoG-AAM that will have no problem in correctly fitting the BreakingBad image:


In [ ]:
# delete previous aam in order to save 
# valuable memory space
del aam

In [ ]:
# label images
labeller(images, 'PTS', ibug_68_trimesh);

def hog_closure(image):
    return image.features.hog(window_step_vertical=3, window_step_horizontal=3)

# build aam
aam = aam_builder(images, 
                  group='PTS', 
                  trilist=images[0].landmarks['ibug_68_trimesh'].lms.trilist, 
                  n_levels=3, 
                  downscale=1.5, 
                  feature_type=hog_closure, 
                  max_shape_components=25, 
                  max_appearance_components=250)
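
The n_levels and downscale options define a multiresolution pyramid. A quick sketch of the relative image scale at each level (assuming, consistent with the cells below, that level 0 is the coarsest):

```python
# relative scale of each pyramid level for n_levels=3, downscale=1.5;
# level 0 is the lowest resolution, the last level is full resolution
n_levels, downscale = 3, 1.5
scales = [1.0 / downscale ** (n_levels - 1 - level) for level in range(n_levels)]
# scales[0] ~ 0.44, scales[1] ~ 0.67, scales[2] == 1.0
```

Fitting then proceeds coarse-to-fine: the result at each level initializes the search at the next, higher-resolution level, which is what makes the multiresolution AAM much more robust than the single-level one built earlier.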

In [ ]:
# lowest resolution level
%matplotlib inline
aam.instance(level=0).view(channels=0) 
# middle resolution level
aam.instance(level=1).view_new(channels=0)
# highest resolution level
aam.instance().view_new(channels=0)

In [ ]:
# initialize Lucas-Kanade aam fitting
lk_aam_fitter = LucasKanadeAAMFitter(aam, 
                                     n_shape=[3, 6, 12], 
                                     n_appearance=50)

In [ ]:
# fit test image
aam_fitting = lk_aam_fitter.fit_image(breakingbad, 
                                      max_iters=60)

In [ ]:
%matplotlib inline
aam_fitting.view_initialization()

In [ ]:
%matplotlib inline
aam_fitting.view_final_fitting()

In [ ]:
%matplotlib qt
aam_fitting.view_warped_images()

In [ ]:
%matplotlib qt
aam_fitting.view_error_images(channels=0)

In [ ]:
%matplotlib inline
aam_fitting.plot_error(color_list=['b'], marker_list=['*'])